Optimal decision making requires that classifiers produce uncertainty estimates consistent with their empirical accuracy. However, deep neural networks are often under- or over-confident in their predictions. Consequently, methods have been developed to improve the calibration of their predictive uncertainty, both during training and post hoc. In this work, we propose differentiable losses to improve calibration based on a soft (continuous) version of the binning operation underlying popular calibration-error estimators. When incorporated into training, these soft calibration losses achieve state-of-the-art single-model ECE across multiple datasets with less than 1% decrease in accuracy. For instance, we observe an 82% reduction in ECE (70% relative to the post-hoc rescaled ECE) in exchange for a 0.7% relative decrease in accuracy compared to the cross-entropy baseline on CIFAR-100. When incorporated post-training, the soft-binning-based calibration error objective improves upon temperature scaling, a popular recalibration method. Overall, experiments across losses and datasets demonstrate that using calibration-sensitive procedures yields better uncertainty estimates under dataset shift than the standard practice of using a cross-entropy loss together with post-hoc recalibration methods.
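A minimal sketch of what such a differentiable, soft-binned ECE penalty might look like, added to a standard cross-entropy objective. The Gaussian-kernel soft assignment of confidences to bins, the bin count, and the `temperature` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_binned_ece(logits, labels, n_bins=15, temperature=0.01):
    """Differentiable ECE surrogate: confidences are softly assigned to bins
    so the binning step no longer blocks gradients. (Sketch only; the paper's
    soft-binning scheme may differ in detail.)"""
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)                      # confidence and predicted class
    correct = (pred == labels).float()                  # per-example accuracy
    centers = torch.linspace(0.0, 1.0, n_bins, device=logits.device)
    # soft membership of each confidence in each bin
    weights = F.softmax(-(conf.unsqueeze(1) - centers) ** 2 / temperature, dim=1)
    bin_mass = weights.sum(dim=0) + 1e-12
    bin_conf = (weights * conf.unsqueeze(1)).sum(dim=0) / bin_mass
    bin_acc = (weights * correct.unsqueeze(1)).sum(dim=0) / bin_mass
    return ((bin_mass / weights.sum()) * (bin_conf - bin_acc).abs()).sum()

# combined training objective (lam is a hypothetical weighting hyperparameter):
# loss = F.cross_entropy(logits, labels) + lam * soft_binned_ece(logits, labels)
```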
Topological data analysis (TDA) is a branch of computational mathematics, bridging algebraic topology and data science, that provides compact, noise-robust representations of complex structures. Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture, resulting in high-dimensional, difficult-to-interpret internal representations of input data. As DNNs become more ubiquitous across multiple sectors of our society, there is increasing recognition that mathematical methods are needed to aid analysts, researchers, and practitioners in understanding and interpreting how these models' internal representations relate to the final classification. In this paper, we apply cutting edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification. We use two common TDA approaches to explore several methods for modeling hidden-layer activations as high-dimensional point clouds, and provide experimental evidence that these point clouds capture valuable structural information about the model's process. First, we demonstrate that a distance metric based on persistent homology can be used to quantify meaningful differences between layers, and we discuss these distances in the broader context of existing representational similarity metrics for neural network interpretability. Second, we show that a mapper graph can provide semantic insight into how these models organize hierarchical class knowledge at each layer. These observations demonstrate that TDA is a useful tool to help deep learning practitioners unlock the hidden structures of their models.
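As a concrete illustration of the persistent-homology side of this pipeline, the sketch below treats a layer's activations as a point cloud and compares two layers by a distance between their persistence diagrams. The choice of the `ripser` and `persim` packages and of the bottleneck distance are assumptions for illustration; the paper's metric and homology dimensions may differ.

```python
import numpy as np
from ripser import ripser          # persistent homology of a point cloud
from persim import bottleneck      # distance between persistence diagrams

def layer_diagrams(activations, maxdim=1):
    """Treat one layer's activations (n_samples x n_units) as a point cloud
    and compute its persistence diagrams up to dimension maxdim."""
    return ripser(np.asarray(activations), maxdim=maxdim)['dgms']

def layer_distance(acts_a, acts_b, dim=1):
    """Bottleneck distance between the H_dim diagrams of two layers,
    a simple persistence-based notion of how structurally different they are."""
    dgm_a = layer_diagrams(acts_a)[dim]
    dgm_b = layer_diagrams(acts_b)[dim]
    return bottleneck(dgm_a, dgm_b)
```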
In this paper, we propose and showcase, for the first time, monocular multi-view layout estimation for warehouse racks and shelves. Unlike typical layout estimation methods, MVRackLay estimates multi-layered layouts, wherein each layer corresponds to the layout of a shelf within a rack. Given a sequence of images of a warehouse scene, a dual-headed Convolutional-LSTM architecture outputs segmented racks, along with the front- and top-view layout of each shelf within a rack. With minimal effort, such an output is transformed into a 3D rendering of all racks, shelves and objects on the shelves, giving an accurate 3D depiction of the entire warehouse scene in terms of racks, shelves and the number of objects on each shelf. MVRackLay generalizes to a diverse set of warehouse scenes with a varying number of objects on each shelf, a varying number of shelves, and the presence of other such racks in the background. Further, MVRackLay shows superior performance vis-a-vis its single-view counterpart, RackLay, in layout accuracy, quantified in terms of the mean IoU and mAP metrics. We also showcase a multi-view stitching of the 3D layouts resulting in a representation of the warehouse scene with respect to a global reference frame akin to a rendering of the scene from a SLAM pipeline. To the best of our knowledge, this is the first such work to portray a 3D rendering of a warehouse scene in terms of its semantic components - Racks, Shelves and Objects - all from a single monocular camera.
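To make the dual-headed recurrent design concrete, here is a compressed PyTorch sketch: a shared encoder and a minimal ConvLSTM cell feed two decoder heads, one for the front-view and one for the top-view layout of each shelf. The layer sizes, the hand-rolled ConvLSTM cell, and the single-convolution heads are placeholders, not the actual MVRackLay architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (stand-in for the recurrent block)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

class DualHeadLayoutNet(nn.Module):
    """Encoder -> ConvLSTM -> two decoder heads (front-view and top-view layouts)."""
    def __init__(self, hid_ch=64, out_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, hid_ch, 7, stride=2, padding=3), nn.ReLU())
        self.rnn = ConvLSTMCell(hid_ch, hid_ch)
        self.front_head = nn.Conv2d(hid_ch, out_ch, 1)   # front-view layout per shelf
        self.top_head = nn.Conv2d(hid_ch, out_ch, 1)     # top-view layout per shelf

    def forward(self, frames):                           # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        state, fronts, tops = None, [], []
        for t in range(T):
            feat = self.encoder(frames[:, t])
            if state is None:                            # lazily size the hidden state
                h0 = feat.new_zeros(B, self.rnn.hid_ch, *feat.shape[-2:])
                state = (h0, h0.clone())
            h, state = self.rnn(feat, state)
            fronts.append(self.front_head(h))
            tops.append(self.top_head(h))
        return torch.stack(fronts, 1), torch.stack(tops, 1)
```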
In this paper, we study convex optimization using a very general formulation called BSGD (Block Stochastic Gradient Descent). At each iteration, some but not necessarily all components of the parameter vector are updated. The update direction can be one of two possibilities: (i) a noise-corrupted measurement of the gradient obtained from a first-order oracle, or (ii) an approximate gradient computed from function values that may themselves be corrupted by noise. This formulation embraces most of the stochastic gradient methods in current use. Building on the theory of stochastic approximation, we establish conditions under which BSGD converges to the global minimum. We then verify the predicted convergence through numerical experiments. The results show that when approximate gradients are used, BSGD converges while momentum-based methods can diverge. With noisy first-order gradients, however, not only BSGD but also standard (full-update) gradient descent and various momentum-based methods converge.
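The block update at the heart of BSGD is easy to sketch. The NumPy toy below picks a random block of coordinates at each iteration and updates only those, using either a noisy gradient oracle or a two-point finite-difference estimate built from (noisy) function values. The step size, block size, and quadratic test function are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def bsgd(f, grad_noisy, x0, steps=1000, lr=0.01, block_size=2,
         use_fd=False, fd_eps=1e-3, rng=None):
    """Block SGD sketch: each iteration updates only a random block of
    coordinates, from (i) a noisy first-order gradient or (ii) a
    finite-difference gradient built from possibly noisy function values."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        block = rng.choice(x.size, size=block_size, replace=False)
        if use_fd:
            g = np.zeros(block_size)
            for j, idx in enumerate(block):        # two-point estimate per coordinate
                e = np.zeros_like(x); e[idx] = fd_eps
                g[j] = (f(x + e) - f(x - e)) / (2 * fd_eps)
        else:
            g = grad_noisy(x)[block]               # noisy gradient, restricted to the block
        x[block] -= lr * g                         # all other coordinates stay untouched
    return x

# toy example: a noisy quadratic
f = lambda x: float(x @ x) + np.random.normal(scale=1e-3)
grad_noisy = lambda x: 2 * x + np.random.normal(scale=1e-3, size=x.shape)
x_star = bsgd(f, grad_noisy, x0=np.ones(10), use_fd=True)
```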
Duckiebots are low-cost mobile robots that are widely used in research and education. Although autonomous driving algorithms already exist for the Duckietown platform, they are either too complex or perform too poorly to navigate a multi-lane track. In addition, memory and computational resources must be left available to the Duckiebot so that it can perform other tasks, such as out-of-distribution input detection. To meet these constraints, we built a low-cost autonomous driving algorithm capable of driving on a two-lane track. The algorithm uses traditional computer vision techniques to identify the central lane marking on the track and derive the corresponding steering angle. The steering is then governed by a PID controller, which smooths the Duckiebot's motion. The performance of the algorithm was compared against the finalists of the NeurIPS 2018 AI Driving Olympics (AIDO), and it outperformed all but one of the finalists. The two main contributions of our algorithm are its low computational requirements and very fast set-up, and efforts to make it more reliable are ongoing.
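A minimal OpenCV sketch of this classic-CV-plus-PID pipeline: threshold the image to find the centre lane marking, turn its horizontal offset into an error signal, and smooth the steering with a PID controller. The HSV thresholds, gains, and region of interest are placeholders, not the values used on the robot.

```python
import cv2
import numpy as np

class PID:
    """Simple PID controller for smoothing the steering command."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt=0.1):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def lane_offset(frame_bgr, lower=(20, 80, 80), upper=(35, 255, 255)):
    """Locate a (yellow) centre lane marking in the lower half of the image
    and return its horizontal offset from the image centre, in [-1, 1]."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                     # look only at the road region
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return 0.0                                  # lane not found: keep going straight
    cx = m["m10"] / m["m00"]                        # centroid of the lane pixels
    return (cx - w / 2) / (w / 2)

# control loop sketch:
# pid = PID(); steering = -pid.step(lane_offset(frame))
```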
Large language models are shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, the model head, and adapter) compare in terms of memorization risk. This presents increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study memorization of fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to attacks is very different. We observe that fine-tuning the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
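For orientation, a minimal loss-thresholding membership-inference baseline against a fine-tuned causal language model is sketched below, using HuggingFace-style interfaces. This is only the simplest attack of the family studied here; the paper's reference-based membership inference and extraction attacks, and its comparison of full, head-only, and adapter fine-tuning, are more involved.

```python
import torch

@torch.no_grad()
def sample_loss(model, tokenizer, text, device="cpu"):
    """Per-example language-modeling loss, the usual membership-inference signal."""
    enc = tokenizer(text, return_tensors="pt").to(device)
    out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def loss_threshold_attack(model, tokenizer, candidates, threshold):
    """Flag a candidate as a likely training member if its loss falls below a
    threshold calibrated on held-out data (calibration omitted here)."""
    return [sample_loss(model, tokenizer, t) < threshold for t in candidates]
```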
Requiring less data to reach accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attackers or adversaries may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in untrusted environments by establishing a novel privacy-preserving embedding space that preserves the privacy of the data while maintaining the accuracy of the model. We examine the impact of various image privacy methods, such as blurring, pixelation, Gaussian noise, and differentially private pixelation (DP-Pix), on few-shot image classification, and propose a method that learns privacy-preserving representations through a joint loss. Empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
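Among the image obfuscation methods mentioned, DP-Pix is the one with a formal guarantee. A rough NumPy sketch is given below: average the image over b x b cells, then add Laplace noise whose scale follows the usual DP-Pix sensitivity of changing up to m pixels. The parameter values and the exact noise calibration are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def dp_pixelate(img, b=8, eps=1.0, m=16, rng=None):
    """DP-Pix sketch: pixelize the image over b x b cells and add Laplace noise
    with scale 255*m / (b*b*eps) to each cell mean."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    out = img.astype(float)
    scale = 255.0 * m / (b * b * eps)
    for i in range(0, h, b):
        for j in range(0, w, b):
            cell = out[i:i + b, j:j + b]
            mean = cell.mean(axis=(0, 1))                      # per-channel cell average
            noise = rng.laplace(scale=scale, size=np.shape(mean))
            out[i:i + b, j:j + b] = mean + noise               # broadcast back over the cell
    return np.clip(out, 0, 255).astype(np.uint8)
```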
The wide adoption and application of Masked language models~(MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities -- to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attack solely on the MLM's model score. We devise a stronger membership inference attack based on likelihood ratio hypothesis testing that involves an additional reference MLM to more accurately quantify the privacy risks of memorization in MLMs. We show that masked language models are extremely susceptible to likelihood ratio membership inference attacks: Our empirical results, on models trained on medical notes, show that our attack improves the AUC of prior membership inference attacks from 0.66 to an alarmingly high 0.90 level, with a significant improvement in the low-error region: at 1% false positive rate, our attack is 51X more powerful than prior work.
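A rough sketch of the reference-based likelihood-ratio statistic follows: the MLM's sequence score is approximated with a pseudo log-likelihood (masking one token at a time) and compared against the same score under a reference MLM. The masking scheme and the HuggingFace-style interfaces (`.logits`, `mask_token_id`) are assumptions for illustration; the paper's exact scoring and thresholding differ in detail.

```python
import torch

@torch.no_grad()
def pseudo_log_likelihood(model, tokenizer, text, device="cpu"):
    """Approximate an MLM's score for a sequence: mask one token at a time and
    sum the log-probability of the true token at each masked position."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"].to(device)
    total = 0.0
    for pos in range(1, ids.shape[1] - 1):            # skip special tokens at the ends
        masked = ids.clone()
        masked[0, pos] = tokenizer.mask_token_id
        logits = model(masked).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[0, pos]].item()
    return total

def likelihood_ratio_score(target, reference, tokenizer, text):
    """Membership test statistic: a large gap between the target model's score
    and the reference model's score suggests the text was seen in training."""
    return (pseudo_log_likelihood(target, tokenizer, text)
            - pseudo_log_likelihood(reference, tokenizer, text))
```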
Reinforcement learning (RL) provides a naturalistic framework for learning through trial and error, which is appealing both because of its simplicity and effectiveness and because of its resemblance to how humans and animals acquire skills through experience. However, real-world embodied learning, such as that performed by humans and animals, takes place in a continual, non-episodic world, whereas common benchmark tasks in RL are episodic, with the environment reset between trials to provide the agent with multiple attempts. This discrepancy presents a major challenge when attempting to take RL algorithms developed for episodic simulated environments and run them on real-world platforms such as robots. In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning in which the agent not only learns from its own experience, but also contends with the lack of human supervision to reset between trials. Based on this framework, we introduce a simulated benchmark, EARL, containing a set of diverse and challenging simulated tasks that reflect the hurdles introduced to learning when only minimal reliance on extrinsic interventions can be assumed. We show that standard approaches to episodic RL, as well as existing methods, struggle as interventions are minimized, underlining the need to develop new algorithms for reinforcement learning with a greater focus on autonomy.
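The contrast with episodic training can be made concrete with a reset-free training loop, sketched below: the agent interacts with one long-lived environment and may only request a hand-reset a small number of times. The `agent.act`/`agent.observe` methods, the gym-style `env.step` return values, and the intervention budget are illustrative interfaces, not part of the EARL benchmark's actual API.

```python
def autonomous_rl_loop(env, agent, total_steps=1_000_000, interventions_allowed=10):
    """Reset-free (ARL-style) training sketch with a small extrinsic-reset budget."""
    obs = env.reset()                       # single initial reset
    resets_used = 0
    for step in range(total_steps):
        action = agent.act(obs)
        next_obs, reward, done, info = env.step(action)
        agent.observe(obs, action, reward, next_obs)
        if done and resets_used < interventions_allowed:
            next_obs = env.reset()          # scarce extrinsic intervention
            resets_used += 1
        obs = next_obs
    return agent
```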
Recent advances in neural networks have addressed common graph problems such as link prediction, node classification, and node clustering by developing embeddings of entities and relations into vector spaces. Graph embeddings encode the structural information present in a graph. The encoded embeddings can then be used to predict missing links in the graph. However, obtaining optimal embeddings for a graph can be a computationally challenging task, especially in an embedded system. The two techniques we focus on in this work are (1) node embeddings from random-walk-based methods and (2) knowledge-graph embeddings. Embeddings from random walks are computationally inexpensive but suboptimal, whereas knowledge-graph embeddings perform better but are computationally expensive. In this work, we investigate a transformation model that converts node embeddings obtained from random-walk-based methods into embeddings obtained directly from knowledge-graph methods, without increasing the computational cost. Extensive experiments show that the proposed transformation model can be used to solve link prediction in real time.
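One simple way to realize such a transformation model is a small regression network fitted on nodes for which both embeddings are available; a PyTorch sketch is given below. The MLP architecture, dimensions, and MSE objective are illustrative assumptions and may differ from the transformation model proposed in the paper.

```python
import torch
import torch.nn as nn

class EmbeddingTranslator(nn.Module):
    """Small MLP mapping cheap random-walk node embeddings (e.g. node2vec-style)
    into the space of knowledge-graph embeddings (e.g. TransE-style)."""
    def __init__(self, dim_rw=128, dim_kg=200, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_rw, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_kg),
        )

    def forward(self, x):
        return self.net(x)

def train_translator(model, rw_emb, kg_emb, epochs=100, lr=1e-3):
    """Fit the mapping on nodes with both embeddings available (tensors of
    shape (n_nodes, dim_rw) and (n_nodes, dim_kg))."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(rw_emb), kg_emb)
        loss.backward()
        opt.step()
    return model
```

At inference time, new nodes only need the inexpensive random-walk embedding plus one forward pass through the translator, which is what makes real-time link prediction plausible.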